Supplementary Materials: Humans in Kitchens: A Dataset for Multi-Person Human Motion Forecasting with Scene Context

Neural Information Processing Systems

Figure 1: Sample scenes with 3D human poses projected onto camera views for each kitchen. A sample skeleton can be seen in Figure 2. Annotation fields: frames (t) gives the frame number in actual dataset time; act (t × 82) gives per-frame action annotations, where 1 marks the presence of an action and 0 its absence. On top of that, SMPL's shape parameter determines limb length, ensuring that the body skeleton remains consistent across time. We bear all responsibility in case of violation of rights. Please note that the dataset can be used without the video data.


CIVIL: Causal and Intuitive Visual Imitation Learning

Dai, Yinlong, Sanchez, Robert Ramirez, Jeronimus, Ryan, Sagheb, Shahabedin, Nunez, Cara M., Nemlekar, Heramb, Losey, Dylan P.

arXiv.org Artificial Intelligence

Today's robots attempt to learn new tasks by imitating human examples. These robots watch the human complete the task, and then try to match the actions taken by the human expert. However, this standard approach to visual imitation learning is fundamentally limited: the robot observes what the human does, but not why the human chooses those behaviors. Without understanding which features of the system or environment factor into the human's decisions, robot learners often misinterpret the human's examples. In practice, this results in causal confusion, inefficient learning, and robot policies that fail when the environment changes. We therefore propose a shift in perspective: instead of asking human teachers just to show what actions the robot should take, we also enable humans to intuitively indicate why they made those decisions. Under our paradigm, human teachers attach markers to task-relevant objects and use natural language prompts to describe their state representation. Our proposed algorithm, CIVIL, leverages this augmented demonstration data to filter the robot's visual observations and extract a feature representation that aligns with the human teacher. CIVIL then applies these causal features to train a transformer-based policy that -- when tested on the robot -- is able to emulate human behaviors without being confused by visual distractors or irrelevant items. Our simulations and real-world experiments demonstrate that robots trained with CIVIL learn both what actions to take and why to take those actions, resulting in better performance than state-of-the-art baselines. From the human's perspective, our user study reveals that this new training paradigm actually reduces the total time required for the robot to learn the task, and also improves the robot's performance in previously unseen scenarios. See videos at our project website: https://civil2025.github.io




Real-Time 3D Vision-Language Embedding Mapping

Rauch, Christian, Ellensohn, Björn, Nwankwo, Linus, Dave, Vedant, Rueckert, Elmar

arXiv.org Artificial Intelligence

A. Vision-Language Models in Robotics

In contrast to classic closed-set methods trained on specific labels, novel Vision-Language Models (VLMs) enable the open-set association of images with their text descriptions [6], [12], or other modalities [13], via a common embedding space, using individual transformers for image, text, or other modalities. VLMs have been used in robotics for open-set tracking of objects in the current camera FoV [14], for interactive pose estimation of relevant parts of tools [15], and for navigation via hand-drawn instructions [16]. By focusing on a single task and the current FoV, these approaches cannot generalise to other tasks or operate on a global level, such as localising tools outside the current FoV. In contrast, we integrate the open-set VLM embeddings in a task-agnostic 3D representation in order to enable a variety of interactive robotic use-cases on the same vision-language representation.

B. Implicit Neural Representations

Due to the availability of vast amounts of 2D images and text, Vision Transformers (ViT) are predominantly trained on 2D image data [17].
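The open-set association via a common embedding space described above can be illustrated with a minimal sketch: matching reduces to comparing an image embedding against embeddings of free-form text by cosine similarity. The vectors and labels below are toy stand-ins, not outputs of any particular VLM.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def open_set_match(image_emb, text_embs):
    """Rank free-form text descriptions by similarity to an image embedding."""
    scores = {label: cosine(image_emb, emb) for label, emb in text_embs.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy low-dimensional embeddings standing in for VLM outputs.
image_emb = [0.9, 0.1, 0.2]
text_embs = {
    "a red screwdriver": [0.8, 0.2, 0.1],
    "a coffee mug": [0.1, 0.9, 0.3],
}
ranked = open_set_match(image_emb, text_embs)
print(ranked[0][0])  # best-matching description
```

Because the comparison is against arbitrary text rather than a fixed label set, new queries need no retraining, which is what makes the representation open-set.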


StealthRank: LLM Ranking Manipulation via Stealthy Prompt Optimization

Tang, Yiming, Fan, Yi, Yu, Chenxiao, Yang, Tiankai, Zhao, Yue, Hu, Xiyang

arXiv.org Machine Learning

The integration of large language models (LLMs) into information retrieval systems introduces new attack surfaces, particularly for adversarial ranking manipulations. We present StealthRank, a novel adversarial ranking attack that manipulates LLM-driven product recommendation systems while maintaining textual fluency and stealth. Unlike existing methods that often introduce detectable anomalies, StealthRank employs an energy-based optimization framework combined with Langevin dynamics to generate StealthRank Prompts (SRPs) -- adversarial text sequences embedded within product descriptions that subtly yet effectively influence LLM ranking mechanisms. We evaluate StealthRank across multiple LLMs, demonstrating its ability to covertly boost the ranking of target products while avoiding explicit manipulation traces that can be easily detected. Our results show that StealthRank consistently outperforms state-of-the-art adversarial ranking baselines in both effectiveness and stealth, highlighting critical vulnerabilities in LLM-driven recommendation systems.
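The energy-based optimization with Langevin dynamics mentioned above can be sketched in toy form: each update takes a gradient step down the energy plus injected Gaussian noise, so iterates sample low-energy configurations rather than just descending. The scalar quadratic energy and step size below are illustrative placeholders for the paper's actual fluency-plus-ranking energy over token embeddings.

```python
import math
import random

random.seed(0)

def grad_energy(x, target):
    """Gradient of a toy quadratic energy E(x) = 0.5 * (x - target)^2."""
    return x - target

def langevin_step(x, target, step=0.1):
    """One Langevin update: gradient descent on E plus scaled Gaussian noise."""
    noise = random.gauss(0.0, 1.0)
    return x - step * grad_energy(x, target) + math.sqrt(2.0 * step) * noise

# Burn in, then collect samples; they concentrate near the energy minimum at 0.
samples = []
x = 5.0
for i in range(2000):
    x = langevin_step(x, target=0.0)
    if i >= 1000:
        samples.append(x)

mean = sum(samples) / len(samples)
print(mean)  # close to the minimum at 0, with sampling noise
```

The noise term is what distinguishes Langevin dynamics from plain gradient descent: it lets the search escape shallow minima and, in the attack setting, trade off energy (fluency and rank) against diversity of candidate prompts.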


Being-0: A Humanoid Robotic Agent with Vision-Language Models and Modular Skills

Yuan, Haoqi, Bai, Yu, Fu, Yuhui, Zhou, Bohan, Feng, Yicheng, Xu, Xinrun, Zhan, Yi, Karlsson, Börje F., Lu, Zongqing

arXiv.org Artificial Intelligence

Building autonomous robotic agents capable of achieving human-level performance in real-world embodied tasks is an ultimate goal in humanoid robot research. Recent advances have made significant progress in high-level cognition with Foundation Models (FMs) and low-level skill development for humanoid robots. However, directly combining these components often results in poor robustness and efficiency due to compounding errors in long-horizon tasks and the varied latency of different modules. We introduce Being-0, a hierarchical agent framework that integrates an FM with a modular skill library. The FM handles high-level cognitive tasks such as instruction understanding, task planning, and reasoning, while the skill library provides stable locomotion and dexterous manipulation for low-level control. To bridge the gap between these levels, we propose a novel Connector module, powered by a lightweight vision-language model (VLM). The Connector enhances the FM's embodied capabilities by translating language-based plans into actionable skill commands and dynamically coordinating locomotion and manipulation to improve task success. With all components, except the FM, deployable on low-cost onboard computation devices, Being-0 achieves efficient, real-time performance on a full-sized humanoid robot equipped with dexterous hands and active vision. Extensive experiments in large indoor environments demonstrate Being-0's effectiveness in solving complex, long-horizon tasks that require challenging navigation and manipulation subtasks. For further details and videos, visit https://beingbeyond.github.io/being-0.


Creativity Has Left the Chat: The Price of Debiasing Language Models

Mohammadi, Behnam

arXiv.org Artificial Intelligence

Large Language Models (LLMs) have revolutionized natural language processing but can exhibit biases and may generate toxic content. While alignment techniques like Reinforcement Learning from Human Feedback (RLHF) reduce these issues, their impact on creativity, defined as syntactic and semantic diversity, remains unexplored. We investigate the unintended consequences of RLHF on the creativity of LLMs through three experiments focusing on the Llama-2 series. Our findings reveal that aligned models exhibit lower entropy in token predictions, form distinct clusters in the embedding space, and gravitate towards "attractor states", indicating limited output diversity. Our findings have significant implications for marketers who rely on LLMs for creative tasks such as copywriting, ad creation, and customer persona generation. The trade-off between consistency and creativity in aligned models should be carefully considered when selecting the appropriate model for a given application. We also discuss the importance of prompt engineering in harnessing the creative potential of base models.
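The entropy finding above is easy to make concrete: lower entropy over next-token probabilities means the model concentrates its mass on fewer continuations. The two distributions below are invented examples, not actual Llama-2 outputs.

```python
import math

def entropy(probs):
    """Shannon entropy (in bits) of a next-token probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

base_model = [0.25, 0.25, 0.25, 0.25]      # diverse: mass spread over tokens
aligned_model = [0.90, 0.05, 0.03, 0.02]   # peaked: collapsed onto one token

print(entropy(base_model))     # 2.0 bits, the maximum for 4 tokens
print(entropy(aligned_model))  # well under 1 bit
```

A peaked distribution like the second one is what produces the "attractor states" the abstract describes: sampling almost always returns the same continuation, so syntactic and semantic diversity drops even at nonzero temperature.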


Manipulating Large Language Models to Increase Product Visibility

Kumar, Aounon, Lakkaraju, Himabindu

arXiv.org Artificial Intelligence

Large language models (LLMs) are increasingly being integrated into search engines to provide natural language responses tailored to user queries. Customers and end-users are also becoming more dependent on these models for quick and easy purchase decisions. In this work, we investigate whether recommendations from LLMs can be manipulated to enhance a product's visibility. We demonstrate that adding a strategic text sequence (STS) -- a carefully crafted message -- to a product's information page can significantly increase its likelihood of being listed as the LLM's top recommendation. To understand the impact of STS, we use a catalog of fictitious coffee machines and analyze its effect on two target products: one that seldom appears in the LLM's recommendations and another that usually ranks second. We observe that the strategic text sequence significantly enhances the visibility of both products by increasing their chances of appearing as the top recommendation. This ability to manipulate LLM-generated search responses provides vendors with a considerable competitive advantage and has the potential to disrupt fair market competition. Just as search engine optimization (SEO) revolutionized how webpages are customized to rank higher in search engine results, influencing LLM recommendations could profoundly impact content optimization for AI-driven search services. Code for our experiments is available at https://github.com/aounon/llm-rank-optimizer.
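The attack loop implied by the abstract can be sketched as black-box hill climbing: append candidate text sequences to the target product's page and keep whichever most improves the product's rank under a scoring oracle. The toy `toy_llm_score` function below is a hypothetical stand-in for querying the actual LLM recommender; the products, trigger phrase, and candidate sequences are all invented.

```python
def rank_of(product, catalog, score_fn):
    """Position of `product` when the catalog is ranked by score (0 = top)."""
    ordered = sorted(catalog, key=score_fn, reverse=True)
    return ordered.index(product)

def optimize_sts(product, catalog, candidates, score_fn):
    """Greedily pick the appended text that yields the best (lowest) rank."""
    best_sts, best_rank = "", rank_of(product, catalog, score_fn)
    for sts in candidates:
        product["page"] = product["base_page"] + " " + sts
        r = rank_of(product, catalog, score_fn)
        if r < best_rank:
            best_sts, best_rank = sts, r
    product["page"] = product["base_page"] + " " + best_sts
    return best_sts, best_rank

def toy_llm_score(p):
    # Stand-in for the LLM's preference: base quality plus a trigger-phrase bonus.
    return p["quality"] + (2.0 if "best value" in p["page"] else 0.0)

catalog = [
    {"name": "MachineA", "quality": 3.0, "base_page": "Drip machine.", "page": "Drip machine."},
    {"name": "MachineB", "quality": 2.0, "base_page": "Espresso machine.", "page": "Espresso machine."},
]
target = catalog[1]  # the product that normally ranks second
sts, rank = optimize_sts(target, catalog, ["compact design", "best value pick"],
                         toy_llm_score)
```

The real attack optimizes the sequence at the token level against model gradients or logits rather than choosing from a fixed candidate list, but the objective is the same: move the target to the top recommendation slot.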


Plug in the Safety Chip: Enforcing Constraints for LLM-driven Robot Agents

Yang, Ziyi, Raman, Shreyas S., Shah, Ankit, Tellex, Stefanie

arXiv.org Artificial Intelligence

Recent advancements in large language models (LLMs) have enabled a new research domain, LLM agents, for solving robotics and planning tasks by leveraging the world knowledge and general reasoning abilities of LLMs obtained during pretraining. However, while considerable effort has been made to teach the robot the "dos," the "don'ts" received relatively less attention. We argue that, for any practical usage, it is as crucial to teach the robot the "don'ts": conveying explicit instructions about prohibited actions, assessing the robot's comprehension of these restrictions, and, most importantly, ensuring compliance. Moreover, verifiable safe operation is essential for deployments that satisfy worldwide standards such as IEC 61508, which defines requirements for safely deploying robots in industrial factory environments. Aiming at deploying the LLM agents in a collaborative environment, we propose a queryable safety constraint module based on linear temporal logic (LTL) that simultaneously enables natural language (NL) to temporal constraints encoding, safety violation reasoning and explaining, and unsafe action pruning. To demonstrate the effectiveness of our system, we conducted experiments in VirtualHome environment and on a real robot. The experimental results show that our system strictly adheres to the safety constraints and scales well with complex safety constraints, highlighting its potential for practical utility.
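The unsafe-action pruning described above can be sketched minimally: a safety module that rejects any candidate action violating a "globally never" constraint and reports the reason. Real LTL monitors track automaton state over action histories; the constraint encoding and action names here are hypothetical simplifications.

```python
# Each entry forbids an (action, object) pair, standing in for an LTL
# formula of the form G !action(object) -- "globally never do this".
SAFETY_CONSTRAINTS = {
    ("place", "knife_countertop_edge"),
    ("pour", "water_on_electronics"),
}

def prune_unsafe(candidate_actions):
    """Split candidate actions into allowed ones and violations with reasons."""
    allowed, violations = [], []
    for action in candidate_actions:
        if action in SAFETY_CONSTRAINTS:
            violations.append((action, f"violates G !{action[0]}({action[1]})"))
        else:
            allowed.append(action)
    return allowed, violations

plan = [("grab", "knife"),
        ("place", "knife_countertop_edge"),
        ("place", "knife_drawer")]
allowed, violations = prune_unsafe(plan)
```

Keeping the violation reasons alongside the pruned plan is what makes the module queryable: the agent can explain to a user which constraint blocked an action instead of silently dropping it.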


Self-Supervised Prediction of the Intention to Interact with a Service Robot

Abbate, Gabriele, Giusti, Alessandro, Schmuck, Viktor, Celiktutan, Oya, Paolillo, Antonio

arXiv.org Artificial Intelligence

A service robot can provide a smoother interaction experience if it has the ability to proactively detect whether a nearby user intends to interact, in order to adapt its behavior e.g. by explicitly showing that it is available to provide a service. In this work, we propose a learning-based approach to predict the probability that a human user will interact with a robot before the interaction actually begins; the approach is self-supervised because after each encounter with a human, the robot can automatically label it depending on whether it resulted in an interaction or not. We explore different classification approaches, using different sets of features considering the pose and the motion of the user. We validate and deploy the approach in three scenarios. The first collects 3442 natural sequences (both interacting and non-interacting) representing employees in an office break area: a real-world, challenging setting, where we consider a coffee machine in place of a service robot. The other two scenarios represent researchers interacting with service robots (200 and 72 sequences, respectively). Results show that, even in challenging real-world settings, our approach can learn without external supervision, and can achieve accurate classification (i.e. AUROC greater than 0.9) of the user's intention to interact with an advance of more than 3 s before the interaction actually occurs.
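Two ingredients of the approach above can be sketched: automatic labeling of each encounter by its outcome, and AUROC evaluation of an intention score against those labels. The AUROC below is the standard rank statistic (probability that a positive outranks a negative); the `approach_speed` feature and the scores are toy placeholders, not the paper's pose and motion features.

```python
def auroc(scores, labels):
    """AUROC as the probability a positive outranks a negative (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Self-supervised labels: 1 if the encounter ended in an interaction, else 0.
encounters = [
    {"approach_speed": 0.9, "interacted": True},
    {"approach_speed": 0.5, "interacted": True},
    {"approach_speed": 0.3, "interacted": False},
    {"approach_speed": 0.2, "interacted": False},
    {"approach_speed": 0.6, "interacted": False},
]
labels = [int(e["interacted"]) for e in encounters]
scores = [e["approach_speed"] for e in encounters]  # toy intention score
print(auroc(scores, labels))
```

The self-supervised step is what removes the annotation cost: every logged encounter becomes a training example as soon as the robot observes whether an interaction followed, so the classifier improves from deployment data alone.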